
    A new configurational bias scheme for sampling supramolecular structures.

    This is the author accepted manuscript. The final version is available from the American Institute of Physics via http://dx.doi.org/10.1063/1.4904727
    We present a new simulation scheme that allows efficient sampling of reconfigurable supramolecular structures made of polymeric constructs functionalized by reactive binding sites. The algorithm is based on the configurational bias scheme of Siepmann and Frenkel and is powered by the possibility of changing the topology of the supramolecular network via a non-local Monte Carlo algorithm. This is accomplished by a multi-scale model that merges coarse-grained simulations, describing the typical polymer conformations, with experimental results accounting for the free energy terms involved in the reactions of the active sites. We test the new algorithm on a system of DNA-coated colloids, for which we compute the hybridisation free energy cost associated with the binding of tethered single-stranded DNAs terminated by short sequences of complementary nucleotides. To demonstrate the versatility of our method, we also consider polymers functionalized by receptors that bind a surface decorated by ligands. In particular, we compute the density of states of adsorbed polymers as a function of the number of ligand-receptor complexes formed. This quantity can be used to study the conformational properties of adsorbed polymers, which is useful when engineering adsorption with tailored properties. We successfully compare the results with the predictions of a mean-field theory. We believe the proposed method will be a useful tool for investigating supramolecular structures resulting from direct interactions between functionalized polymers, for which efficient numerical methodologies are still lacking.
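    The configurational bias scheme mentioned in this abstract builds on Rosenbluth-style biased chain growth. The following is a minimal illustrative sketch of that underlying idea for a self-avoiding lattice polymer, not the authors' actual algorithm (which additionally uses non-local topology-changing moves and experimentally derived binding free energies); the function name and parameters are hypothetical.

```python
import random

# trial directions on a 2D square lattice
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def grow_chain_rosenbluth(n_monomers, rng=random):
    """Grow a 2D lattice self-avoiding chain with Rosenbluth weighting.

    At each step, enumerate the trial positions that avoid overlap with
    already-placed monomers, pick one uniformly at random, and accumulate
    the Rosenbluth weight W = prod(k_i), where k_i is the number of open
    trial positions at step i. Returns (chain, W); W == 0 means the
    growth got trapped in a dead end.
    """
    chain = [(0, 0)]
    occupied = {(0, 0)}
    weight = 1.0
    for _ in range(n_monomers - 1):
        x, y = chain[-1]
        trials = [(x + dx, y + dy) for dx, dy in MOVES
                  if (x + dx, y + dy) not in occupied]
        if not trials:
            return chain, 0.0  # dead end: the chain is trapped
        weight *= len(trials)
        nxt = rng.choice(trials)
        chain.append(nxt)
        occupied.add(nxt)
    return chain, weight
```

    Configurational bias Monte Carlo corrects for this growth bias by accepting regrown configurations with a probability that depends on the ratio of old and new Rosenbluth weights, so that the correct Boltzmann ensemble is sampled.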

    26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3 - Meeting Abstracts - Antwerp, Belgium. 15–20 July 2017

    This work was produced as part of the activities of the FAPESP Research, Dissemination and Innovation Center for Neuromathematics (grant 2013/07699-0, S. Paulo Research Foundation). NLK is supported by a FAPESP postdoctoral fellowship (grant 2016/03855-5). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0).

    Computational capacity of a cerebellum model

    Linking network structure to function is a long-standing issue in the neuroscience field. An outstanding example is the cerebellum. Its structure has been known in great detail for decades, but the full range of computations it performs is still unknown. This reflects a need for new systematic methods to characterize the computational capacities of the cerebellum. In the present work, we apply a method borrowed from the field of machine learning to evaluate the computational capacity and the working memory of a prototypical cerebellum model. The model we study is a reservoir computing rate model of the cerebellar granular layer in which granule cells form a recurrent inhibitory network and Purkinje cells are modelled as linear trainable readout neurons. It was introduced by [2, 3] to demonstrate how the recurrent dynamics of the granular layer is needed to perform typical cerebellar tasks (e.g. timing-related tasks). The method, described in detail in [1], consists of feeding the model with a random time-dependent input signal and then quantifying how well a complete set of functions (each function representing a different type of computation) of the input signal can be reconstructed by taking a linear combination of the neuronal activations. We conducted simulations with 1000 granule cells. Relevant parameters were optimized within a biologically plausible range using a Bayesian learning approach. Our results show that the prototypical cerebellum model can compute both linear functions, as expected from previous work, and, surprisingly, highly nonlinear functions of its input (specifically, Legendre polynomial functions up to the 10th degree). Moreover, the model has a working memory of the input up to 100 ms in the past. These two properties are essential for performing typical cerebellar functions, such as fine-tuning nonlinear motor control tasks or, we believe, even higher cognitive functions.
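    The capacity measure described in this abstract can be sketched in a few lines: drive a recurrent network with random input, then check how well a linear readout fit to the neuronal activations reconstructs a delayed copy of the input (working memory) or a Legendre polynomial of it (nonlinear capacity). The sketch below uses a generic echo-state-style rate reservoir as a stand-in for the granular-layer model; the network size, spectral radius, delay, and polynomial degree are illustrative assumptions, not values from the paper.

```python
import numpy as np

def capacity_score(states, target):
    """Squared correlation between the target signal and its best linear
    reconstruction from the reservoir states (the capacity measure)."""
    w, *_ = np.linalg.lstsq(states, target, rcond=None)
    pred = states @ w
    return np.corrcoef(pred, target)[0, 1] ** 2

rng = np.random.default_rng(0)
n_neurons, n_steps, washout = 200, 3000, 200

# random time-dependent input signal in [-1, 1]
u = rng.uniform(-1, 1, n_steps)

# generic echo-state-style rate reservoir (illustrative parameters)
W = rng.normal(0.0, 1.0, (n_neurons, n_neurons))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1
w_in = rng.uniform(-1, 1, n_neurons)

x = np.zeros(n_neurons)
states = np.empty((n_steps, n_neurons))
for t in range(n_steps):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

X = states[washout:]  # discard the initial transient

# working memory: reconstruct the input delayed by 5 steps
mem = capacity_score(X[5:], u[washout:-5])

# nonlinear capacity: reconstruct the 2nd-degree Legendre polynomial of the input
leg = capacity_score(X, 0.5 * (3 * u[washout:] ** 2 - 1))
```

    Summing such scores over all delays and polynomial degrees gives the total memory and computational capacity of the network, which is the quantity the abstract's method evaluates.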
